Welcome to XAIES, the expert platform for explainable AI solutions and systems.

 

LARGE RESOURCE OF CURATED AND UPDATED INFORMATION ON XAI. PART 2.

 

Beyond XAI?

https://www.ayasdi.com/beyond-explainability-ai-transparency/

Beyond Explainability - AI Transparency - AyasdiAI

It is now well understood that in order to make Artificial Intelligence broadly useful, it is critical that humans can interact with and have confidence in the algorithms that are being used. This observation has led to the development of the notion of explainable AI (sometimes called XAI).

-----------------------“”----------------------- “”-----------------------

Model Cards for Model Reporting

https://arxiv.org/pdf/1810.03993.pdf
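
The paper proposes a short, standardized document that accompanies every released model. To make the proposal concrete, here is a minimal sketch of the card's sections as a Python dataclass; the sections come from the paper, but the class and field names are my own illustration, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Sections proposed in "Model Cards for Model Reporting" (illustrative names)."""
    model_details: str               # developer, version, model type, license
    intended_use: str                # primary use cases and out-of-scope uses
    factors: str                     # relevant demographic/environmental factors
    metrics: str                     # performance measures and decision thresholds
    evaluation_data: str             # datasets behind the reported results
    training_data: str               # data the model was trained on
    quantitative_analyses: str       # results disaggregated across the factors
    ethical_considerations: str
    caveats_and_recommendations: str

card = ModelCard(
    model_details="toy classifier v0.1 (hypothetical)",
    intended_use="demonstration only",
    factors="n/a",
    metrics="accuracy at threshold 0.5",
    evaluation_data="held-out split",
    training_data="synthetic",
    quantitative_analyses="n/a",
    ethical_considerations="none identified for this toy",
    caveats_and_recommendations="not for production use",
)
print(card.intended_use)
```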

-----------------------“”----------------------- “”-----------------------

GUIDE TO INTERPRETABLE MACHINE LEARNING

https://www.topbots.com/interpretable-machine-learning/

If you can’t explain it simply, you don’t understand it well enough. — Albert Einstein

-----------------------“”----------------------- “”-----------------------

Explainability vs interpretability

https://bdtechtalks.com/2020/07/27/black-box-ai-models/

-----------------------“”----------------------- “”-----------------------

Explainable anatomical shape

Spiral: Explainable anatomical shape analysis through deep hierarchical generative models

https://arxiv.org/abs/1907.00058

SPIRAL.IMPERIAL.AC.UK

-----------------------“”----------------------- “”-----------------------

Interpretable policy derivation for reinforcement learning based on evolutionary feature synthesis

LINK.SPRINGER.COM

https://link.springer.com/article/10.1007/s40747-020-00175-y

Reinforcement learning based on deep neural networks has attracted much attention and has been widely used in real-world applications. However, its black-box nature limits its use in high-stakes areas such as manufacturing and healthcare.

-----------------------“”----------------------- “”-----------------------

Self-explainable AI

https://bdtechtalks.com/2020/06/15/self-explainable-artificial-intelligence/

The case for self-explainable AI

Scientist Daniel Elton discusses why we need artificial intelligence models that can explain their decisions by themselves as humans do.

-----------------------“”----------------------- “”-----------------------

Evolution of Classifier Confusion on the Instance Level

ARXIV.ORG: https://arxiv.org/abs/2007.11353

-----------------------“”----------------------- “”-----------------------

Deep meta-learning XAI

Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models | IntechOpen

https://www.intechopen.com/online-first/explainable-artificial-intelligence-xai-approaches-and-deep-meta-learning-models

Explainable artificial intelligence (xAI) is one of the interesting issues to have emerged recently. Many researchers are approaching the subject from different dimensions, and interesting results have come out.

-----------------------“”----------------------- “”-----------------------

Explaining Deep Neural Networks using Unsupervised Clustering

https://arxiv.org/pdf/2007.07477.pdf

ARXIV.ORG

-----------------------“”----------------------- “”-----------------------

Interactive Studio for Explanatory Model Analysis. This is an R package.

https://cran.r-project.org/web/packages/modelStudio/modelStudio.pdf

CRAN.R-PROJECT.ORG

-----------------------“”----------------------- “”-----------------------

Automated Reasoning for Explainable AI

http://kocoon.gforge.inria.fr/slides/marques-silva.pdf

KOCOON.GFORGE.INRIA.FR

-----------------------“”----------------------- “”-----------------------

The "first" fusion of XAI libraries

https://blog.fiddler.ai/2020/07/fiddler-captum-collaborate-on-explainable-ai/

BLOG.FIDDLER.AI

Fiddler Labs

AI with trust, visibility, and insights built in. Fiddler is a breakthrough AI engine with explainability at its heart.
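
Captum is the open-source PyTorch attribution library in this collaboration; Fiddler itself is a hosted platform, so only the Captum half lends itself to a snippet. A minimal sketch of integrated gradients on a toy model (the toy network and input are mine; the IntegratedGradients API is Captum's own):

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A tiny stand-in model, just to have something to attribute.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4)

# Integrated gradients: per-feature attribution for the class-0 output.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=0, return_convergence_delta=True)
print(attributions, delta)
```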

-----------------------“”----------------------- “”-----------------------

AI perspective on understanding and meaning

BDTECHTALKS.COM

https://bdtechtalks.com/2020/07/13/ai-barrier-meaning-understanding/

AI’s struggle to reach “understanding” and “meaning”

Computer scientist Melanie Mitchell breaks down the key elements that could allow artificial intelligence algorithms to grasp the "meaning" of things.

-----------------------“”----------------------- “”-----------------------

Robust Decision Tree

https://link.springer.com/chapter/10.1007/978-3-030-50153-2_36

Robust Predictive-Reactive Scheduling: An Information-Based Decision Tree Model

LINK.SPRINGER.COM

In this paper we introduce a proactive-reactive approach to deal with uncertain scheduling problems.

-----------------------“”----------------------- “”-----------------------

Yellowbrick, built directly on the scikit-learn API

SCIKIT-YB.ORG

Yellowbrick: Machine Learning Visualization — Yellowbrick v1.1 documentation

Yellowbrick extends the Scikit-Learn API to make model selection and hyperparameter tuning easier. Under the hood, it’s using Matplotlib.
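
A minimal sketch of the visualizer pattern the docs describe (the dataset and estimator choices are mine): a Yellowbrick visualizer wraps an estimator, follows the familiar fit/score flow, and then renders the plot with Matplotlib.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ConfusionMatrix

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The visualizer wraps the estimator: fit, score, then draw the plot.
viz = ConfusionMatrix(LogisticRegression(max_iter=5000))
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
viz.show()
```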

-----------------------“”----------------------- “”-----------------------

Levels of XAI framework

https://link.springer.com/chapter/10.1007/978-3-030-51924-7_6

-----------------------“”----------------------- “”-----------------------

Decision Theory Meets Explainable AI. A CIU without the HAP!

https://link.springer.com/chapter/10.1007/978-3-030-51924-7_4

LINK.SPRINGER.COM

Decision Theory Meets Explainable AI

Explainability has been a core research topic in AI for decades and therefore it is surprising that the current concept of Explainable AI (XAI) seems to have been launched as late as 2016.

-----------------------“”----------------------- “”-----------------------

ExplainX.ai

https://github.com/explainX/explainx

GITHUB.COM

Explain any Black-Box Machine Learning Model with explainX: Fast, Scalable & State-of-the-art Explainable AI Platform. - explainX/explainx

-----------------------“”----------------------- “”-----------------------

Explainable 3D Convolutional Neural Networks by Learning Temporal Transformations

https://deepai.org/publication/explainable-3d-convolutional-neural-networks-by-learning-temporal-transformations

DEEPAI.ORG

In this paper we introduce the temporally factorized 3D convolution (3TConv) as an interpretable alternative to the regular 3D convolutions.

-----------------------“”----------------------- “”-----------------------

XAI package DALEX

https://github.com/ModelOriented/DALEX

GITHUB.COM

moDel Agnostic Language for Exploration and eXplanation - ModelOriented/DALEX
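
The linked repository is the R original; a Python port (dalex on PyPI) exposes the same model-agnostic workflow. A minimal sketch, with my own choice of model and dataset:

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap any fitted model; the Explainer itself is model-agnostic.
explainer = dx.Explainer(model, X, y, label="rf")

explainer.model_parts().plot()               # global: permutation importance
explainer.predict_parts(X.iloc[[0]]).plot()  # local: break-down for one case
```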

-----------------------“”----------------------- “”-----------------------

AIMLAI, Submission deadline: Jul 22, 2020

https://project.inria.fr/aimlai/

PROJECT.INRIA.FR

Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI)

-----------------------“”----------------------- “”-----------------------

CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus

https://arxiv.org/pdf/2001.02643.pdf

Code also available: https://github.com/fkluger/consac

fkluger/consac

CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus - fkluger/consac

-----------------------“”----------------------- “”-----------------------

The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI

SCIENCEDIRECT.COM

https://www.sciencedirect.com/science/article/pii/S0933365720301330

The problem of the explainability of AI decision-making has attracted considerable attention in recent years.

-----------------------“”----------------------- “”-----------------------

When Explanations Lie

https://arxiv.org/abs/1912.09818

(the linked code page could not be found!)

ARXIV.ORG

When Explanations Lie: Why Many Modified BP Attributions Fail

-----------------------“”----------------------- “”-----------------------

Attack to Explain Deep Representation

https://openaccess.thecvf.com/content_CVPR_2020/papers/Jalwana_Attack_to_Explain_Deep_Representation_CVPR_2020_paper.pdf

OPENACCESS.THECVF.COM

-----------------------“”----------------------- “”-----------------------

Funny title from Google: Neural Networks Are More Productive Teachers Than Human Raters 🙂. As you might expect, the paper is about knowledge distillation from a black box. It was accepted at CVPR, which is taking place this week!

https://arxiv.org/pdf/2003.13960.pdf

ARXIV.ORG

-----------------------“”----------------------- “”-----------------------

InterpretML from Microsoft

https://github.com/interpretml/interpret

GITHUB.COM

Fit interpretable models. Explain blackbox machine learning. - interpretml/interpret
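
The headline model in InterpretML is the Explainable Boosting Machine, a glassbox learner whose per-feature shape functions can be inspected directly. A minimal sketch using the library's documented API (the dataset choice is mine):

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glassbox model: boosted, additive, and inspectable per feature.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # per-feature shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction breakdowns
```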

-----------------------“”----------------------- “”-----------------------

SK-MOEFS: A Library in Python for Designing Accurate and Explainable Fuzzy Models

LINK.SPRINGER.COM

SK-MOEFS: A Library in Python for Designing Accurate and Explainable Fuzzy Models

Recently, the explainability of Artificial Intelligence (AI) models and algorithms has become an important requirement in real-world applications.

-----------------------“”----------------------- “”-----------------------

Fooling LIME and SHAP

https://arxiv.org/abs/1911.02508

ARXIV.ORG

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
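
The attack works because LIME and SHAP query the model on perturbed, off-manifold samples; an adversarial "scaffolded" model can detect those queries and answer innocently while staying biased on real data. A self-contained sketch of that idea on synthetic data (the toy models and detector are my illustration, not the authors' code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_real = rng.normal(size=(500, 5))  # stand-in for the real data distribution

def biased(X):      # what actually runs on real inputs
    return (X[:, 0] > 0).astype(int)

def innocuous(X):   # what the explainer is shown
    return (X[:, 1] > 0).astype(int)

# Train a detector that separates real rows from LIME/SHAP-style perturbations.
X_pert = X_real + rng.normal(scale=2.0, size=X_real.shape)
ood = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_real, X_pert]),
    np.r_[np.ones(len(X_real)), np.zeros(len(X_pert))],
)

def scaffolded_predict(X):
    # Honest (biased) answers on-manifold, innocent answers off-manifold.
    is_real = ood.predict(X).astype(bool)
    return np.where(is_real, biased(X), innocuous(X))
```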

-----------------------“”----------------------- “”-----------------------

Explainable cooperative machine learning

https://link.springer.com/article/10.1007/s13218-020-00632-3

LINK.SPRINGER.COM

eXplainable Cooperative Machine Learning with NOVA

In the following article, we introduce a novel workflow, which we subsume under the term “explainable cooperative machine learning” and show its practical application in a data annotation and model training tool called NOVA.

-----------------------“”----------------------- “”-----------------------

XAI research job in Rome

https://euraxess.ec.europa.eu/jobs/527048

-----------------------“”----------------------- “”-----------------------

Neural Graph Learning

https://storage.googleapis.com/pub-tools-public-publication-data/pdf/bbd774a3c6f13f05bf754e09aa45e7aa6faa08a8.pdf

STORAGE.GOOGLEAPIS.COM

-----------------------“”----------------------- “”-----------------------

If you want to play around, maybe earn some money:

https://www.innocentive.com/ar/challenge/browse?categoryName=Biology

INNOCENTIVE.COM

InnoCentive Challenge Center

-----------------------“”----------------------- “”-----------------------

LIMEtree

https://arxiv.org/pdf/2005.01427.pdf

-----------------------“”----------------------- “”-----------------------

 

Explainable AI Through Combination of Deep Tensor and Knowledge Graph

https://www.fujitsu.com/global/documents/about/resources/publications/fstj/archives/vol55-2/paper14.pdf

FUJITSU.COM

-----------------------“”----------------------- “”-----------------------

Master's thesis on Quantifying the Performance of Explainability Algorithms, University of Waterloo, 2020

https://uwspace.uwaterloo.ca/bitstream/handle/10012/15922/Lin_ZhongQiu.pdf?sequence=5&isAllowed=y

UWSPACE.UWATERLOO.CA

-----------------------“”----------------------- “”-----------------------

XAI by Topological Hierarchical Decomposition

https://math.osu.edu/events/topology-geometry-and-data-seminar-ryan-kramer

also the paper: https://arxiv.org/abs/1811.10658

Topology, Geometry and Data Seminar - Ryan Kramer

MATH.OSU.EDU

-----------------------“”----------------------- “”-----------------------

A very simple way to picture XAI in terms of the way our brain thinks

https://hbr.org/2017/05/linear-thinking-in-a-nonlinear-world

-----------------------“”----------------------- “”-----------------------

XAI for COVID-19 classification (actually a random forest)

https://www.medrxiv.org/node/82227.external-links.html

MEDRXIV.ORG

An Interpretable Machine Learning Framework for Accurate Severe vs Non-severe COVID-19 Clinical Type Classification

Effectively and efficiently diagnosing COVID-19 patients with the accurate clinical type is essential to achieving optimal patient outcomes as well as reducing the risk of overloading the healthcare system.

-----------------------“”----------------------- “”-----------------------

More Python XAI tools!

https://pyss3.readthedocs.io/en/latest/

Welcome to PySS3’s documentation! — PySS3 0.5.9 documentation

PYSS3.READTHEDOCS.IO

PySS3 is a Python package that allows you to work with The SS3 Classification Model in a very straightforward, interactive and visual way.
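
A minimal sketch following the package's quickstart style (the toy documents are mine; check the docs for the full API):

```python
from pyss3 import SS3

# SS3 is a text classifier whose per-word confidence values can be
# visualized, which is what makes it interesting for XAI.
clf = SS3()
x_train = ["the team won the match", "the stock market fell today"]
y_train = ["sports", "economy"]
clf.fit(x_train, y_train)

print(clf.predict(["the match ended in a draw"]))
```

The docs also describe an interactive Live Test tool for visually inspecting which words drive each prediction.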

-----------------------“”----------------------- “”-----------------------

XAI Critics!

https://medium.com/@rezakhorshidi/unpopular-opinions-on-explainable-ai-1st-out-of-n-which-explanation-6b24eef02b59

-----------------------“”----------------------- “”-----------------------

Interpreting Interpretability!

http://www-personal.umich.edu/~harmank/Papers/CHI2020_Interpretability.pdf

WWW-PERSONAL.UMICH.EDU

-----------------------“”----------------------- “”-----------------------

gshap 0.0.3, the latest release!

https://pypi.org/project/gshap/

gshap from PYPI.ORG

A technique in explainable AI for answering broader questions in machine learning.

-----------------------“”----------------------- “”-----------------------

Machine learning-based XAI

https://ieeexplore.ieee.org/document/9007737

IEEEXPLORE.IEEE.ORG

Explainable Machine Learning for Scientific Insights and Discoveries - IEEE Journals & Magazine

-----------------------“”----------------------- “”-----------------------